Patent abstract:
System and method for supporting different ingestion schemes for a content delivery network. According to an embodiment, a method of operating a server computer includes receiving streaming media data. The streaming media data includes content fragments and a media description file, and the media description file includes metadata describing the content fragments. The method also includes storing the content fragments in a cache.
Publication number: BR112012008217B1
Application number: R112012008217-5
Filing date: 2010-09-15
Publication date: 2021-08-10
Inventors: Hongbing Li; Peng Zhang; Yunchao Gao; Yekui Wang; Heather Hong Yu; Yue Chen
Applicant: Huawei Technologies Co., Ltd
IPC primary class:
Patent description:

TECHNICAL FIELD
[001] The present invention relates generally to computer networks, and more particularly to a system and method to support different ingestion and delivery schemes for a content delivery network.

BACKGROUND
[002] Internet Protocol (IP) routing was originally designed for host-to-host communication. Currently, however, much Internet traffic is used for content dissemination. As the demand for content such as streaming video increases, using existing Internet infrastructure becomes more challenging, especially with regard to time- and bandwidth-intensive traffic such as streaming audio and video media content.
[003] In an Internet content delivery network, ingested media content may have different file formats targeted to different audio encoders/decoders and video encoders/decoders and to different types of media clients, such as computers, televisions and mobile phones. These different types of media clients generally have different requirements regarding media file formats, encoders/decoders, bit rates and so on. For example, a high-definition television system requires higher image resolution than a cell phone, and hence larger media files and higher bit rates. Generally speaking, when different copies of content are required for different delivery schemes, multiple copies of the content are saved on the origin server and cached on the content delivery system's edge server.
[004] The presence of multiple media files, however, results in higher network traffic and lower system performance. For example, in the presence of multiple media files, a cache of a given size will be able to store less video, resulting in a higher cache-miss ratio. From the user's perspective, this can result in periodic interruptions in streaming media.
[005] What is needed are systems and methods to improve the delivery of streaming video content.

SUMMARY OF THE INVENTION
[006] According to an embodiment, a method of operating a computer server includes receiving streaming media data. The streaming media data includes content fragments and a media description file, and the media description file includes metadata describing the content fragments. The method also includes storing the content fragments in a cache.
[007] According to a further embodiment, a method of operating a computer server includes receiving source media content and processing the source media content to produce content fragments and a media description file describing the content fragments. The content fragments and the media description file are in a uniform format.
[008] According to an additional embodiment, a server system includes a gateway, a cache and a processor. The processor receives streaming media data from an input port, where the streaming media data includes content fragments and a media description file, and the media description file includes metadata describing the content fragments. The processor also stores the content fragments in a cache, combines a plurality of the content fragments from the cache to produce streaming media content of a particular configuration, and transmits the streaming media content of the particular configuration to a media client.
[009] The foregoing has rather broadly outlined the features of some embodiments of the present invention in order that the following detailed description of the invention may be better understood. Additional features and advantages of embodiments of the invention will be described below, which form the subject of the claims of the invention. It should be understood by those skilled in the art that the specific designs and embodiments disclosed can be readily used as a basis for modifying or designing other structures or processes to achieve the same purposes as the present invention. It is also to be understood by those skilled in the art that such equivalent constructions do not depart from the spirit and scope of the invention as set out in the appended claims.

BRIEF DESCRIPTION OF THE DRAWINGS
[0010] For a more complete understanding of the embodiments, and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
[0011] Figure 1 illustrates an embodiment content delivery system;
[0012] Figure 2 illustrates an embodiment media pre-processing flow at an embodiment media input stage;
[0013] Figure 3 illustrates an embodiment video format;
[0014] Figure 4 illustrates an embodiment audio format;
[0015] Figure 5 illustrates an embodiment media description model;
[0016] Figure 6 illustrates an embodiment media data storage scheme;
[0017] Figure 7 illustrates an embodiment video fragment format;
[0018] Figure 8 illustrates an embodiment audio fragment format;
[0019] Figure 9 illustrates an embodiment edge server media storage scheme;
[0020] Figure 10 illustrates an embodiment container file format; and
[0021] Figure 11 illustrates an example of assembling a file according to an embodiment.

DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS
[0022] The construction and use of embodiments are discussed in detail below. It should be noted, however, that the present invention provides many applicable inventive concepts that can be embodied in a wide variety of specific contexts. The specific embodiments discussed are merely illustrative of specific ways to make and use the invention, and do not limit the scope of the invention.
[0023] The present invention will be described with respect to various embodiments in a specific context, namely a system and method to support different ingestion and delivery schemes for a content delivery network. Embodiments of the invention can also be applied to other types of communication systems and networks.
[0024] In one embodiment, a system and method for supporting different ingestion and delivery schemes for a content delivery network has three stages: a media input stage; a caching stage, in which media is delivered from the origin server to the edge servers for caching; and a media output stage. At the media input stage, a live video source stream or file is encoded, transcoded or recoded into a video encoding format, for example H.264/AVC, and an audio stream or file is encoded, transcoded or recoded into an audio encoding format, for example AAC. To handle variations in available network bandwidth, terminal capabilities and user preferences, multiple media alternatives, e.g., video content of different bit rates, resolutions, frame rates and languages, are prepared for media adaptation at the media input stage. Additionally, for efficient caching and on-demand transcoding on the edge servers, audio and video streams are fragmented in a synchronized manner. In the second stage, a pull or push mode is used to deliver media from the origin server to the edge servers. In either mode, media content is transported block by block, where each block is made up of one or more fragments. In one embodiment, media is stored on edge servers as fragments or blocks of fragments. In the media output stage, different delivery schemes such as file transfer, progressive transfer, HTTP streaming and RTP/RTSP streaming are supported.
[0025] Figure 1 illustrates a content delivery system according to an embodiment of the present invention. Logically, the system has the media input stage 102, the media caching stage 104 and the media output stage 106. Physically, the system has the origin server 108 and the edge server 110. In one embodiment, origin server 108 and edge server 110 are located in different parts of the network. Alternatively, servers 108 and 110 can be co-located. In one embodiment, one or more origin servers 108 may be in communication with one or more edge servers 110.
[0026] In one embodiment, the origin server 108 receives a media source, e.g., in the form of a media file and/or live content, and performs media pre-processing 112 to produce pre-processed media data and description data. The pre-processed media is stored in memory 114, which may also be a hard disk or other storage device, in the form of media data and a description. In one embodiment, media pre-processing is functionally performed by processor 116. Origin server 108 transmits the pre-processed media data and description via network connection 118 to edge server 110. Network connection 118 may be a direct connection, or any type of network connection known in the art, including, but not limited to, a wired or wireless connection, an Ethernet, Internet IP connection, or other type of broadband connection.
[0027] The edge server 110 receives the pre-processed media data from the origin server 108 and caches the data 122 using the caching function 120. When necessary, the streaming transfer function 126 creates streaming data using transcoding function 128, which transcodes the pre-processed media data to a target media client format. In one embodiment, streaming and on-demand transcoding functions are performed by processor 124.
[0028] In one embodiment, to improve system management and adaptation efficiency, a uniform media format is used in the media caching stage 104. The uniform media format comprises a uniform video format, a uniform audio format and a uniform container file format. For example, in one embodiment, H.264 (video) and Advanced Audio Coding (AAC) are used as the uniform media formats. In alternative embodiments, other formats such as MPEG-2 Video and AAC can be used. At the media input stage 102, the video stream is encoded, recoded or transcoded to the uniform video format, e.g., the H.264 format, the audio stream is encoded, recoded or transcoded to the uniform audio format, e.g., the AAC format, and the container file format is transcoded to a uniform container file format. In one embodiment, the uniform container file format is in accordance with the ISO base media file format. In alternative embodiments, other file formats can be used.
[0029] In one embodiment, to handle variations in available network bandwidth, terminal capabilities and user preferences, multiple media alternatives, e.g., video content of different bit rates, resolutions, frame rates, languages, and so on, are prepared for media adaptation at the media input stage 102.
[0030] Figure 2 illustrates an embodiment media pre-processing flow 200 at an embodiment media input stage. Content separation block 202 receives media files or live content and separates the content into video content and audio content. The video content is first segmented 204 and then transcoded 206 when necessary, while the audio data is first transcoded 208 when necessary and then segmented 210. The segmented and possibly transcoded video and audio data are processed by the container management block 212 and stored in memory 216, which may be a hard disk or other storage device. Media description generation block 214 generates a media description of the audio and video segments, which is also stored in memory 216. In alternative embodiments, the order of segmenting and transcoding may differ from what is illustrated in Fig. 2.
[0031] In one embodiment, the audio and video streams are fragmented (i.e., stored as movie fragments as specified in the ISO base media file format) in a synchronized manner. In one embodiment, each video fragment has a fixed time duration, for example 2000 milliseconds, with an exception for the last fragment, which contains the remaining video frames and may have a significantly different time duration and/or number of video frames. Alternatively, other fixed or non-fixed time durations can be used. Each video fragment contains an integer number of groups of pictures (GOPs), for example exactly one GOP. In one embodiment, each GOP is a closed GOP, meaning that the first video frame of the GOP is a random access point, and a fixed-length GOP, in either or both of time duration and number of video frames.
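The fixed-duration fragmentation described above, with the remainder absorbed into the last fragment, can be sketched as follows (a minimal illustration of the scheme in this paragraph; the function name and millisecond units are assumptions, not part of the patent):

```python
def fragment_boundaries(total_ms, frag_ms=2000):
    """Split a stream timeline into fixed-duration fragments.

    Every fragment except the last spans exactly frag_ms milliseconds;
    the last fragment absorbs the remaining frames and may therefore
    have a different duration, as described above.
    """
    bounds = []
    start = 0
    while total_ms - start > frag_ms:
        bounds.append((start, start + frag_ms))
        start += frag_ms
    bounds.append((start, total_ms))  # last fragment: the remainder
    return bounds
```

For example, a 7000 ms stream yields three 2000 ms fragments plus a final 1000 ms fragment holding the remaining frames.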
[0032] In one embodiment, the audio fragments are time-aligned with the video fragments as closely as possible. In some embodiments, each audio fragment has an integer number of encoded audio samples. Depending on the audio sample rate, an audio fragment may not have exactly the same time duration as the corresponding video fragment. In some embodiments, fragmentation processes 204 and 210 are performed to make the time duration of each audio fragment as close as possible to that of the corresponding video fragment.
[0033] In one embodiment, audio fragments are aligned to video fragments as follows. Assume that Dvi represents the duration of video fragment i, Dai(n) represents the duration of audio fragment i containing n samples, Dai(n-1) represents the duration of audio fragment i containing n-1 samples, and Dai(n+1) represents the duration of audio fragment i containing n+1 samples. The number of audio samples contained in audio fragment i is then equal to the n for which both of the following conditions are satisfied:

|Dvi - Dai(n)| < |Dvi - Dai(n-1)|, (1)

|Dvi - Dai(n)| < |Dvi - Dai(n+1)|. (2)
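Conditions (1) and (2) simply state that n minimizes the gap between the audio and video fragment durations. Assuming a constant encoded audio sample duration, the selection reduces to rounding (a hedged sketch; the function name and parameterization are illustrative assumptions):

```python
def audio_samples_per_fragment(video_frag_ms, audio_sample_ms):
    """Pick the sample count n whose duration Dai(n) = n * audio_sample_ms
    is closest to the video fragment duration Dvi, i.e. the n satisfying
    conditions (1) and (2) above. Assumes constant sample duration."""
    return max(1, round(video_frag_ms / audio_sample_ms))
```

For a 2000 ms video fragment and 48 kHz AAC (1024 samples per encoded frame, about 21.33 ms each), this yields n = 94, giving an audio fragment of roughly 2005.3 ms.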
[0034] On the origin server, to support efficient file management and storage, in one embodiment all fragments belonging to the same content alternative are stored in one track of one file according to the ISO base media file format. For each quality level of the video streams, the embodiment video format 300 is shown in Fig. 3. The video format 300 has the file type header 302, the media metadata 304, one or more video fragments 310 and the movie fragment random access block 320. The media metadata block 304 has the movie header 306 and the video track header 308, and each fragment 310 has a movie fragment header 312 and the media data 314, which contains the actual video data. The movie fragment random access block 320 has the track fragment random access block 322 and the movie fragment random access offset block 324.
[0035] Figure 4 illustrates the embodiment audio format 400, which is similar to the video format 300. The audio format 400 has a file type header 402, media metadata 404, one or more audio fragments 410 and the movie fragment random access block 420. The media metadata block 404 has the movie header 406 and the audio track header 408, and each fragment 410 has the movie fragment header 412 and the media data 414, which contains the actual audio data. The movie fragment random access block 420 has the track fragment random access block 422 and the movie fragment random access offset block 424.
[0036] In one embodiment, after media pre-processing there are video files of multiple quality levels, and multiple audio files for, for example, the potentially different audio encoders/decoders, audio channels, audio languages and quality levels. In one embodiment, a video alternative and an audio alternative are stored in one file.
[0037] In one embodiment, a media description file describes the corresponding video streams and audio streams. An example media description template based on SMIL (Synchronized Multimedia Integration Language) is illustrated in Figure 5.
[0038] Figure 6 illustrates an embodiment storage scheme 600 for pre-processed media data at the media input stage. One or more video files 604 and 606 and one or more audio files 608 and 610 are stored and referenced by a single media description file 602. In one embodiment, each video file 604 and 606 can represent a different quality level, and each audio file 608 and 610 can represent a different target audio encoder/decoder, channel, language and/or bit rate. One or more precomputed container files or manifest files 612 and 614 for each delivery scheme are additionally stored and referenced by the media description file 602.
[0039] In an embodiment caching stage, media content is transported from the origin server to the edge server block by block, where each block is made up of one or more fragments. In one embodiment, the basic transport unit between the origin server and the edge server is an audio and/or video block. In one embodiment, each block of audio or video is saved and managed as a single file on the edge server. Alternatively, an audio block and a video block can be stored together in one file on the edge server.
[0040] Fig. 7 illustrates an embodiment video block format 700 having the movie fragment 702 and the media data 704. In one embodiment, each video block contains one video fragment. Alternatively, more than one fragment can be used. In one embodiment, movie fragment 702 is formatted according to the ISO base media file format standard, and contains information regarding the type, size, and location of each sample in media data 704. In one embodiment, each video fragment is named "v_xx_yyyyy.frv", where "xx" represents a two-digit video track ID, and "yyyyy" represents a five-digit fragment sequence number. For example, "v_01_00001.frv" is the first video fragment of video track 1. In alternative embodiments, other formats and identification schemes can be used.
[0041] Figure 8 illustrates an embodiment audio block format 800 having the movie fragment 802 and the media data 804, which is similar to the video block format 700. In one embodiment, each audio block contains one audio fragment. In one embodiment, the movie fragment 802 is formatted according to the ISO base media file format standard, and contains information regarding the type, size and location of each sample in the media data 804. In one embodiment, each audio fragment is named "a_xx_yyyyy.fra", where "xx" represents a two-digit audio track ID, and "yyyyy" represents a five-digit fragment sequence number. For example, "a_01_00001.fra" is the first audio fragment of audio track 1. In alternative embodiments, other formats and identification schemes can be used.
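The naming rule for both fragment types ("v_xx_yyyyy.frv" for video, "a_xx_yyyyy.fra" for audio) can be captured in a small helper. This is a sketch: the patent only specifies the name pattern, so the function itself and its name are assumptions.

```python
def fragment_filename(media, track_id, seq):
    """Build a fragment file name following the described naming scheme:
    'v' -> v_xx_yyyyy.frv (video), 'a' -> a_xx_yyyyy.fra (audio),
    with a two-digit track ID and a five-digit fragment sequence number."""
    if media not in ("v", "a"):
        raise ValueError("media must be 'v' (video) or 'a' (audio)")
    ext = "frv" if media == "v" else "fra"
    return f"{media}_{track_id:02d}_{seq:05d}.{ext}"
```

For example, fragment_filename("v", 1, 1) returns "v_01_00001.frv", the first video fragment of video track 1.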
[0042] Figure 9 illustrates an embodiment internal media storage scheme 900 for the edge server of the caching stage. In one embodiment, content is stored in one or more container files 904, 906 and 908, each of which is indicated in the media description XML 902 and corresponds to an alternative of the content. In one embodiment, the media description XML 902 is formatted in accordance with the SMIL model illustrated in Figure 5. Each container file 904, 906 and 908 has one or more associated video and/or audio fragment files. In one embodiment, fragment files 910, 912, 914, 916, 918 and 920 included in container file 1 (904) represent video and audio of a first content alternative. Fragments 922, 924 and 926 included in container file 2 (906) and fragments 928, 930, 932, 934, 936 and 938 included in container file j (908) represent fragments of other content alternatives. In one embodiment, content alternatives are created on the origin server. Content alternatives can be generated according to various parameters, such as the quality of the ingested content on the origin server side, and client-side variables such as network bandwidth, CPU capabilities, screen resolution and/or end-user configuration. In one embodiment, there are j container files and m fragments. In one embodiment, each video fragment is identified according to n_m, where n is the number of total video quality level alternatives, and each audio fragment is identified according to k_m, where k is the number of total audio track alternatives.
[0043] In one embodiment, manifest files are defined for particular streaming technologies. For example, Figure 9 has the Silverlight client manifest XML file 940, which is a manifest file formatted for Microsoft Silverlight Smooth Streaming. In alternative embodiments, other manifest files for other streaming technologies, such as Adobe Flash, can be used.
[0044] In an embodiment media output stage, video fragments of different quality levels and audio fragments from different encoders/decoders with different audio channels are combined to support different delivery schemes and to satisfy the requirements of different access network bandwidths and terminal capabilities.
[0045] For example, the embodiment of Figure 9 supports Microsoft Silverlight Smooth Streaming. Here, Silverlight Smooth Streaming uses a manifest file, with fragmented audio and video being delivered to the client playback device one by one, and uses a quality level and a start time to request specific audio and video fragments. In one embodiment, the quality level is mapped to the audio or video track ID, and the start time is mapped to the fragment sequence number. With the audio/video track ID and the fragment sequence number, the corresponding cached a_xx_yyyyy.fra or v_xx_yyyyy.frv fragment file is delivered directly to the Silverlight client for playback. In one embodiment, the cue fragments shown in Fig. 9 contain information that can be used to aid in forming MPEG-2 transport stream packets from the audio and video fragments.
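Assuming fixed 2000 ms fragments, the mapping from a Smooth-Streaming-style request (quality level, start time) to a cached fragment file might look like the following sketch. The quality-to-track-ID mapping and the start-time arithmetic are illustrative assumptions; the patent only states that the quality level maps to a track ID and the start time maps to a fragment sequence number.

```python
def fragment_for_request(media, quality_to_track, quality, start_time_ms,
                         frag_ms=2000):
    """Map a (quality level, start time) request to a cached fragment file.

    quality_to_track maps a quality level to a track ID (assumed mapping);
    the start time is converted to a 1-based fragment sequence number by
    dividing by the fixed fragment duration.
    """
    track_id = quality_to_track[quality]
    seq = start_time_ms // frag_ms + 1
    ext = "frv" if media == "v" else "fra"
    return f"{media}_{track_id:02d}_{seq:05d}.{ext}"
```

For example, fragment_for_request("v", {"720p": 1}, "720p", 4000) returns "v_01_00003.frv", the video fragment covering the 4000-6000 ms window of track 1.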
[0046] Figure 10 illustrates the embodiment container file 1000. The container file 1000 has the file type header 1002 and the media metadata 1004. The media metadata 1004 has the movie header 1006, the audio track header 1008 and the video track header 1010.
[0047] In one embodiment, file transfer and HTTP progressive transfer, which use a fully interleaved file, are performed as follows. With the video fragment 700 illustrated in Figure 7, the audio fragment 800 illustrated in Figure 8 and the container file 1000 illustrated in Figure 10, a complete MP4 file for file transfer and HTTP progressive transfer is assembled as illustrated in Figure 11. Here, the output stage generates the MP4 file 1102 by assembling the file header and metadata information from the container file, and the actual video and audio data from fragments 1104 and 1106.
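Under the ISO base media file format, one way to realize this assembly is to emit the container file's header and metadata boxes followed by the fragments' data in order. The following is a naive byte-concatenation sketch, assuming the container file holds exactly the file type and movie metadata boxes and each fragment file holds a complete movie-fragment/media-data pair; the function name and paths are illustrative, and a production implementation would also rewrite offsets and interleave samples.

```python
def assemble_progressive_file(container_path, fragment_paths, out_path):
    """Concatenate container header/metadata with fragment data, mirroring
    the assembly of MP4 file 1102 in Figure 11 (sketch only)."""
    with open(out_path, "wb") as out:
        with open(container_path, "rb") as header:
            out.write(header.read())      # file type header + media metadata
        for path in fragment_paths:       # alternating audio/video fragments
            with open(path, "rb") as frag:
                out.write(frag.read())    # movie fragment + media data pair
    return out_path
```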
[0048] In one embodiment, a unified internal encoder/decoder and container format is used. At the media input stage, a live video source stream or file is encoded, transcoded or recoded into a video encoding format, e.g., H.264/AVC, and any audio stream or file is encoded, transcoded or recoded into an audio encoding format, e.g., AAC. In the media output stage, different delivery schemes, including file transfer, progressive transfer, HTTP streaming and RTP/RTSP streaming, are supported based on the video and audio blocks stored on the edge server.
[0049] In one embodiment, a method of storing content (with or without alternative video tracks and audio tracks) and unified metadata supports different delivery schemes. As depicted in Figure 6, some container files or manifest files can be prepared at the media input stage.
[0050] In one embodiment, a flexible storage and delivery scheme uses media stream fragmentation and a multi-tier file management scheme. At different stages of the content delivery system, the media stream is manifested in different sizes. Such an embodiment allows, for example, efficient data management and high-performance streaming. For example, in the embodiment system depicted in Figure 1, media data is stored as a single stream per audio or video alternative on the origin server for ease of management, while the basic storage unit on the edge server is an audio or video block. The block-based storage scheme on the edge server provides real-time on-demand transcoding capability via parallel media processing.
[0051] In one embodiment, a compact media description scheme is combined with a naming rule for video fragments and audio fragments. The media description scheme enables a system to efficiently locate requested video fragments and audio fragments, based on quality level and time or byte range, from the cache, or from the origin server when a cache miss occurs.
[0052] Although the embodiments and their advantages have been described in detail, it is to be understood that various changes, substitutions and alterations may be made herein without departing from the spirit and scope of the invention as defined by the appended claims. Furthermore, the scope of the present application is not intended to be limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods and steps described in the specification. As one of ordinary skill in the art will readily appreciate from the disclosure of the present invention, processes, machines, manufacture, compositions of matter, means, methods or steps, presently existing or later to be developed, that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be used according to the present invention. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods or steps.
Claims:
Claims (13)
[0001]
1. Method of operating a computer server, the method characterized in that it comprises: receiving streaming media data, the streaming media data comprising a first plurality of alternative content fragments, a second plurality of alternative content fragments and a single media description file comprising metadata describing both the first plurality of alternative content fragments and the second plurality of alternative content fragments, wherein the first plurality of alternative content fragments and the second plurality of alternative content fragments carry alternative versions of the same media content; and storing the content fragments in a cache, wherein: the first plurality of alternative content fragments comprises a series of earlier content fragments with a fixed time duration, followed by a last content fragment with a variable time duration, wherein each content fragment in the first plurality of alternative content fragments corresponds to one or more alternative content fragments in the second plurality of alternative content fragments, and wherein the first plurality of alternative content fragments and the second plurality of alternative content fragments carry the same media content at different bitrates.
[0002]
2. Method according to claim 1, characterized by the fact that the streaming media data is in a uniform format.
[0003]
3. Method according to claim 1, characterized in that receiving the streaming media data comprises receiving streaming media data from one or more servers.
[0004]
4. Method according to claim 1, characterized in that it further comprises combining the first plurality of alternative content fragments and the second plurality of alternative content fragments in the cache to produce streaming media content of a particular configuration.
[0005]
5. Method according to claim 4, characterized in that it further comprises transmitting the streaming media content of the particular configuration to a network interface.
[0006]
6. Method according to claim 1, characterized in that the streaming media data comprises video fragments and audio fragments.
[0007]
7. Method of operating a computer server, the method characterized in that it comprises: receiving source media content; and processing the source media content to produce streaming media data, the streaming media data comprising a first plurality of alternative content fragments, a second plurality of alternative content fragments, and a single media description file describing both the first plurality of alternative content fragments and the second plurality of alternative content fragments, wherein the first plurality of alternative content fragments and the second plurality of alternative content fragments carry alternative versions of the same media content; wherein the first plurality of alternative content fragments comprises a series of earlier content fragments of a fixed time duration, followed by a last content fragment of a variable time duration, wherein each content fragment in the first plurality of alternative content fragments corresponds to one or more alternative content fragments in the second plurality of alternative content fragments, and wherein the first plurality of alternative content fragments and the second plurality of alternative content fragments carry the same media content at different bitrates.
[0008]
8. Method according to claim 7, characterized in that processing comprises: separating the source media content into audio media content and video media content; transcoding the audio media content and the video media content to a uniform format; fragmenting the video media content into video fragments; and fragmenting the audio media content into audio fragments.
[0009]
9. Method according to claim 8, characterized by the fact that the audio fragments are aligned in time with the video fragments.
[0010]
10. Method according to claim 8, characterized in that transcoding further comprises: transcoding the audio media content to an AAC format; and transcoding the video media content to an H.264 format.
[0011]
11. Method according to claim 7, characterized in that processing comprises: separating the source media content into audio media content and video media content; segmenting the video media content into video fragments; transcoding the audio media content to a uniform audio format; transcoding the video fragments to a uniform video format; and segmenting the audio media content into audio fragments, the audio fragments temporally corresponding with the video fragments.
[0012]
12. Method according to claim 7, characterized by the fact that the content fragments and the media description file comprise a uniform format.
[0013]
13. Server system, characterized in that it comprises: a gateway; a cache; and a processor configured to: receive streaming media data from an input port, the streaming media data comprising a first plurality of alternative content fragments, a second plurality of alternative content fragments, and a single media description file, the media description file comprising metadata describing both the first plurality of alternative content fragments and the second plurality of alternative content fragments, wherein the first plurality of alternative content fragments and the second plurality of alternative content fragments carry alternative versions of the same media content; store the first plurality of alternative content fragments and the second plurality of alternative content fragments in the cache; combine the first plurality of alternative content fragments and the second plurality of alternative content fragments from the cache to produce streaming media content of a particular configuration; and transmit the streaming media content of the particular configuration to a media client, wherein the first plurality of alternative content fragments comprises a series of earlier content fragments with a fixed time duration, followed by a last content fragment with a variable time duration, wherein each content fragment in the first plurality of alternative content fragments corresponds to one or more alternative content fragments in the second plurality of alternative content fragments, and wherein the first plurality of alternative content fragments and the second plurality of alternative content fragments carry the same media content at different bitrates.
Similar technologies:
Publication number | Publication date | Patent title
BR112012008217B1|2021-08-10|METHOD OF OPERATING A COMPUTER SERVER AND SERVER SYSTEM
US9351020B2|2016-05-24|On the fly transcoding of video on demand content for adaptive streaming
KR101701182B1|2017-02-01|A method for recovering content streamed into chunk
KR101983432B1|2019-05-28|Devices, systems, and methods for converting or translating dynamic adaptive streaming over http| to http live streaming|
CN107409234B|2020-12-25|Streaming based on file format using DASH format based on LCT
CA2807156C|2017-06-06|Trick modes for network streaming of coded video data
ES2788901T3|2020-10-23|Continuous multi-period content processing
TWI714602B|2021-01-01|Middleware delivery of dash client qoe metrics
ES2842589T3|2021-07-14|Multicast streaming
EP2773078B1|2017-06-07|Method, system and devices for multimedia content delivery using adaptive streaming
BR112019014070A2|2020-02-04|signaling data for prefetch support for streaming media data
EP3673661A1|2020-07-01|Method for synchronizing gops and idr-frames on multiple encoders without communication
BR112019020629A2|2020-04-22|segment types such as delimiters and addressable resource identifiers
US10019448B2|2018-07-10|Methods and systems for providing file data for media files
WO2019011430A1|2019-01-17|Fast tune-in for low latency streaming
BR112020022899A2|2021-02-23|flag, in a manifest file, missing sections of media data for streaming network
CN105900437B|2020-03-31|Communication apparatus, communication data generating method, and communication data processing method
BR112020000307A2|2020-07-14|media data processing that uses file tracks for web content
US20190149859A1|2019-05-16|Determining a time budget for transcoding of video
WO2021017958A1|2021-02-04|Video transcoding method and apparatus
Stockhammer2011|MPEG's Dynamic Adaptive Streaming over HTTP |–Enabling Formats for Video Streaming over the Open Internet
Begen et al.2018|Are the streaming format wars over?
Navali et al.2016|Common Mezzanine Distribution Format |: For ABR TV Distribution
WO2022006229A1|2022-01-06|Streaming media data including an addressable resource index track with switching sets
BR112013002692B1|2021-10-26|METHOD AND DEVICE TO RETRIEVE MULTIMEDIA DATA, METHOD AND DEVICE TO SEND INFORMATION TO MULTIMEDIA DATA, AND COMPUTER-READABLE MEMORY
Patent family:
Publication number | Publication date
US20110087794A1|2011-04-14|
CN106170095B|2019-05-03|
CN102474504A|2012-05-23|
RU2632394C2|2017-10-04|
US8751677B2|2014-06-10|
WO2011041974A1|2011-04-14|
RU2012118695A|2013-11-20|
BR112012008217A2|2017-03-01|
EP2471271A1|2012-07-04|
CN106170095A|2016-11-30|
EP2471271A4|2012-07-04|
CN102474504B|2016-08-10|
Cited documents:
Publication number | Filing date | Publication date | Applicant | Patent title

US6262777B1|1996-11-15|2001-07-17|Futuretel, Inc.|Method and apparatus for synchronizing edited audiovisual files|
US7302490B1|2000-05-03|2007-11-27|Microsoft Corporation|Media file format to support switching between multiple timeline-altered media streams|
US6766407B1|2001-03-27|2004-07-20|Microsoft Corporation|Intelligent streaming framework|
JP4165298B2|2003-05-29|2008-10-15|株式会社日立製作所|Terminal device and communication network switching method|
US7818444B2|2004-04-30|2010-10-19|Move Networks, Inc.|Apparatus, system, and method for multi-bitrate content streaming|
KR100729224B1|2004-10-13|2007-06-19|한국전자통신연구원|Exteneded Multimedia File Structure and Multimedia File Producting Method and Multimedia File Executing Method|
US7536469B2|2004-12-10|2009-05-19|Microsoft Corporation|System and process for controlling the coding bit rate of streaming media data employing a limited number of supported coding bit rates|
EP1862008A2|2005-02-18|2007-12-05|Koninklijke Philips Electronics N.V.|Method of mutltiplexing auxiliary data in an audio/video stream|
US8788933B2|2005-12-01|2014-07-22|Nokia Corporation|Time-shifted presentation of media streams|
JP2007173987A|2005-12-19|2007-07-05|Canon Inc|Multimedia data transmission/reception system and device, or program|
US7874015B2|2006-05-12|2011-01-18|At&T Intellectual Property I, L.P.|Methods, systems, and computer program products for controlling distribution of digital content in a file sharing system using license-based verification, encoded tagging, and time-limited fragment validity|
US8966371B2|2006-09-11|2015-02-24|Apple Inc.|Metadata for providing media content|
US7743161B2|2006-10-10|2010-06-22|Ortiva Wireless, Inc.|Digital content buffer for adaptive streaming|
DE102007045741A1|2007-06-27|2009-01-08|Siemens Ag|Method and device for coding and decoding multimedia data|
US8683066B2|2007-08-06|2014-03-25|DISH Digital L.L.C.|Apparatus, system, and method for multi-bitrate content streaming|
JP4634477B2|2008-03-07|2011-02-23|レノボ・シンガポール・プライベート・リミテッド|Media file playback without interruption|
US7860996B2|2008-05-30|2010-12-28|Microsoft Corporation|Media streaming with seamless ad insertion|
US8387150B2|2008-06-27|2013-02-26|Microsoft Corporation|Segmented media content rights management|
US8909806B2|2009-03-16|2014-12-09|Microsoft Corporation|Delivering cacheable streaming media presentations|
US9485299B2|2009-03-09|2016-11-01|Arris Canada, Inc.|Progressive download gateway|
US8566393B2|2009-08-10|2013-10-22|Seawell Networks Inc.|Methods and systems for scalable video chunking|
CA2767368C|2009-08-14|2013-10-08|Azuki Systems, Inc.|Method and system for unified mobile content protection|
US9792363B2|2011-02-01|2017-10-17|Vdopia, INC.|Video display method|
WO2011100901A2|2011-04-07|2011-08-25|华为技术有限公司|Method, device and system for transmitting and processing media content|
US20120278495A1|2011-04-26|2012-11-01|Research In Motion Limited|Representation grouping for http streaming|
EP2525587B1|2011-05-17|2017-07-05|Alcatel Lucent|Method for streaming video content, node in a network for monitoring video content streaming|
US9462024B2|2011-06-08|2016-10-04|Futurewei Technologies, Inc.|System and method of media content streaming with a multiplexed representation|
CN103023859B|2011-09-22|2018-06-19|中兴通讯股份有限公司|The content processing method and system of a kind of distributed business network|
US9565476B2|2011-12-02|2017-02-07|Netzyn, Inc.|Video providing textual content system and method|
US9591098B2|2012-02-01|2017-03-07|Cisco Technology, Inc.|System and method to reduce stream start-up delay for adaptive streaming|
US20130246578A1|2012-03-16|2013-09-19|Cisco Technology, Inc.|Adaptive Bit Rate Optimizations When Joining Single Profile Multicast Streams|
WO2013144981A2|2012-03-28|2013-10-03|Soumya Das|On-the-fly encoding and streaming of video data in a peer-to-peer video sharing environment|
US9246741B2|2012-04-11|2016-01-26|Google Inc.|Scalable, live transcoding with support for adaptive streaming and failover|
US9538183B2|2012-05-18|2017-01-03|Home Box Office, Inc.|Audio-visual content delivery with partial encoding of content chunks|
EP3767961A1|2012-06-12|2021-01-20|Coherent Logix, Inc.|A distributed architecture for encoding and delivering video content|
CN102710966A|2012-06-13|2012-10-03|百视通网络电视技术发展有限责任公司|Video live broadcast method and system based on HTTP |
US9197944B2|2012-08-23|2015-11-24|Disney Enterprises, Inc.|Systems and methods for high availability HTTP streaming|
EP2908535A4|2012-10-09|2016-07-06|Sharp Kk|Content transmission device, content playback device, content distribution system, method for controlling content transmission device, method for controlling content playback device, control program, and recording medium|
US10171887B2|2013-03-13|2019-01-01|Comcast Cable Communications, Llc|Methods and systems for intelligent playback|
US10382512B2|2013-03-14|2019-08-13|Microsoft Technology Licensing, Llc|Distributed fragment timestamp synchronization|
FR3004054A1|2013-03-26|2014-10-03|France Telecom|GENERATING AND RETURNING A FLOW REPRESENTATIVE OF AUDIOVISUAL CONTENT|
CN103237068B|2013-04-17|2015-11-25|北京科技大学|The differentiable stream media buffer replacing method of contents attribute in CDN-P2P|
US9148386B2|2013-04-30|2015-09-29|Cisco Technology, Inc.|Managing bandwidth allocation among flows through assignment of drop priority|
CA2914058C|2013-05-31|2021-07-13|Level 3 Communications, Llc|Storing content on a content delivery network|
CN104426915B|2013-08-19|2017-12-01|中国电信股份有限公司|Realize method, server and system that Online Music segmentation is downloaded|
US8718445B1|2013-09-03|2014-05-06|Penthera Partners, Inc.|Commercials on mobile devices|
CN104469433B|2013-09-13|2018-09-07|深圳市腾讯计算机系统有限公司|Method and device is reviewed in a kind of net cast|
US9244916B2|2013-10-01|2016-01-26|Penthera Partners, Inc.|Downloading media objects|
US9923945B2|2013-10-10|2018-03-20|Cisco Technology, Inc.|Virtual assets for on-demand content generation|
JP2015136057A|2014-01-17|2015-07-27|ソニー株式会社|Communication device, communication data generation method, and communication data processing method|
CN103957469B|2014-05-21|2017-09-15|百视通网络电视技术发展有限责任公司|Based on the Internet video-on-demand method and system for turning encapsulation in real time|
US9288510B1|2014-05-22|2016-03-15|Google Inc.|Adaptive video transcoding based on parallel chunked log analysis|
CN104023244A|2014-05-29|2014-09-03|深圳市云宙多媒体技术有限公司|Method and apparatus for slicing stream media data in CDN system|
WO2016009420A1|2014-07-13|2016-01-21|Ani-View Ltd|A system and methods thereof for generating a synchronized audio with an imagized video clip respective of a video clip|
CN104639949B|2015-03-03|2018-07-06|腾讯科技(深圳)有限公司|A kind of video source cut-in method and device|
US10057314B2|2015-04-17|2018-08-21|Telefonaktiebolaget Lm Ericsson |Dynamic packager network based ABR media distribution and delivery|
CN105872572A|2015-12-14|2016-08-17|乐视云计算有限公司|Live broadcast video processing method and device|
US10701415B2|2016-05-19|2020-06-30|Arris Enterprises Llc|Method and apparatus for segmenting data|
US10015612B2|2016-05-25|2018-07-03|Dolby Laboratories Licensing Corporation|Measurement, verification and correction of time alignment of multiple audio channels and associated metadata|
CN107452041B|2016-05-31|2020-07-31|阿里巴巴集团控股有限公司|Picture generation method and device|
US10264044B2|2016-08-29|2019-04-16|Comcast Cable Communications, Llc|Apparatus and method for sending content as chunks of data to a user device via a network|
CN107241398B|2017-05-24|2019-09-03|中广热点云科技有限公司|A kind of method for downloading video based on content distributing network|
US11012488B2|2017-12-13|2021-05-18|Verizon Patent And Licensing Inc.|Content streaming redundancy architecture|
CN112040302A|2019-06-03|2020-12-04|阿里巴巴集团控股有限公司|Video buffering method and device, electronic equipment and computer readable storage medium|
Legal status:
2019-01-08| B06F| Objections, documents and/or translations needed after an examination request according [chapter 6.6 patent gazette]|
2020-02-27| B06U| Preliminary requirement: requests with searches performed by other patent offices: procedure suspended [chapter 6.21 patent gazette]|
2020-03-10| B15K| Others concerning applications: alteration of classification|Free format text: The previous classification was: H04L 29/06; IPC: H04N 21/2343 (2011.01), H04L 29/06 (2006.01), H04N |
2021-06-22| B09A| Decision: intention to grant [chapter 9.1 patent gazette]|
2021-08-10| B16A| Patent or certificate of addition of invention granted [chapter 16.1 patent gazette]|Free format text: Term of validity: 20 (twenty) years counted from 15/09/2010, subject to the legal conditions. Patent granted pursuant to ADI 5.529/DF, which determines the alteration of the grant term. |
Priority:
Application number | Filing date | Patent title
US24984809P| true| 2009-10-08|2009-10-08|
US61/249848|2009-10-08|
US12/832,828|US8751677B2|2009-10-08|2010-07-08|System and method to support different ingest and delivery schemes for a content delivery network|
US12/832828|2010-07-08|
PCT/CN2010/076958|WO2011041974A1|2009-10-08|2010-09-15|System and method to support different ingest and delivery schemes for a content delivery network|